CS168: The Modern Algorithmic Toolbox Lecture #14: Linear and Convex Programming, with Applications to Sparse Recovery

Authors

  • Tim Roughgarden
  • Gregory Valiant
Abstract

Recall the setup in compressive sensing. There is an unknown signal z ∈ Rⁿ, and we can only glean information about z through linear measurements. We choose m linear measurements a₁, . . . , aₘ ∈ Rⁿ. "Nature" then chooses a signal z, and we receive the results b₁ = ⟨a₁, z⟩, . . . , bₘ = ⟨aₘ, z⟩ of our measurements, as applied to z. The goal is then to recover z from b. The last lecture culminated in the following sparse recovery guarantee for compressive sensing.
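The measurement-and-recovery setup above can be sketched in a few lines of Python. The sizes n, m, and the sparsity k below are illustrative assumptions, as are the Gaussian measurement vectors and the use of SciPy's `linprog`; the ℓ₁-minimization-as-LP reformulation is the standard one (minimize Σtᵢ subject to −tᵢ ≤ xᵢ ≤ tᵢ and Ax = b), not code from the lecture itself.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3  # hypothetical: signal length, #measurements, sparsity

# "Nature" chooses a k-sparse signal z in R^n.
z = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
z[support] = rng.standard_normal(k)

# Our m measurement vectors a_1, ..., a_m are the rows of A (Gaussian here).
A = rng.standard_normal((m, n))
b = A @ z  # b_i = <a_i, z>

# Recover z by l1-minimization, written as an LP over variables (x, t):
#   minimize sum(t)  subject to  x - t <= 0,  -x - t <= 0,  A x = b.
I = np.eye(n)
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print("max recovery error:", np.max(np.abs(x_hat - z)))
```

With these parameters the LP typically recovers z exactly (up to solver tolerance), illustrating the sparse recovery guarantee the abstract refers to: far fewer than n generic linear measurements suffice when z is sparse.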





Publication date: 2015